Reviews: Robust and Efficient Transfer Learning with Hidden Parameter Markov Decision Processes

Neural Information Processing Systems

Summary: This paper presents a new transfer learning approach using Bayesian Neural Networks (BNNs) in MDPs. The authors build on the existing framework of Hidden Parameter MDPs and replace the Gaussian Process with BNNs, thereby also modeling the joint uncertainty in the latent weights and the state space. Overall, the proposed approach is sound, well developed, and seems to help scale the inference. The authors have also shown that it works well by applying it to multiple domains. The paper is extremely well written.


Robust and Efficient Transfer Learning with Hidden Parameter Markov Decision Processes

Killian, Taylor W., Daulton, Samuel, Konidaris, George, Doshi-Velez, Finale

Neural Information Processing Systems

We introduce a new formulation of the Hidden Parameter Markov Decision Process (HiP-MDP), a framework for modeling families of related tasks using low-dimensional latent embeddings. We also replace the original Gaussian Process-based model with a Bayesian Neural Network, enabling more scalable inference. Thus, we expand the scope of the HiP-MDP to applications with higher dimensions and more complex dynamics.
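The core idea in the abstract above can be illustrated with a toy sketch (not the authors' implementation): a transition model that predicts the next state from the current state, the action, and a low-dimensional latent task embedding, with weight uncertainty approximated by sampling from a mean-field Gaussian posterior. All dimensions, variable names, and the softplus/tanh choices here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not from the paper).
STATE_DIM, ACTION_DIM, LATENT_DIM, HIDDEN = 4, 1, 2, 16
IN_DIM = STATE_DIM + ACTION_DIM + LATENT_DIM

# Variational parameters of a two-layer BNN: per-weight posterior means
# and pre-softplus scales (mean-field Gaussian approximation).
mu1 = rng.normal(0, 0.1, (IN_DIM, HIDDEN))
rho1 = np.full((IN_DIM, HIDDEN), -3.0)
mu2 = rng.normal(0, 0.1, (HIDDEN, STATE_DIM))
rho2 = np.full((HIDDEN, STATE_DIM), -3.0)

def sample_weights():
    """Draw one weight sample W ~ N(mu, softplus(rho)^2) per layer."""
    s1 = mu1 + np.log1p(np.exp(rho1)) * rng.standard_normal(mu1.shape)
    s2 = mu2 + np.log1p(np.exp(rho2)) * rng.standard_normal(mu2.shape)
    return s1, s2

def predict_next_state(state, action, w_b, n_samples=50):
    """Monte Carlo predictive mean/std of s' given (s, a, w_b).

    w_b is the task's latent embedding: the same network models a whole
    family of tasks, and tasks differ only through w_b.
    """
    x = np.concatenate([state, action, w_b])
    preds = []
    for _ in range(n_samples):
        W1, W2 = sample_weights()
        h = np.tanh(x @ W1)      # hidden layer
        preds.append(h @ W2)     # predicted next state
    preds = np.stack(preds)
    return preds.mean(axis=0), preds.std(axis=0)

# Two tasks from the same family share the network but differ in w_b.
w_task_a = np.array([0.5, -0.2])
w_task_b = np.array([-1.0, 0.8])
s, a = np.zeros(STATE_DIM), np.ones(ACTION_DIM)
mean_a, std_a = predict_next_state(s, a, w_task_a)
mean_b, std_b = predict_next_state(s, a, w_task_b)
```

Because weights and the latent input pass through the same forward samples, the predictive standard deviation reflects joint uncertainty over the dynamics and the embedding, which is the property the paper highlights over modeling them independently.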


Robust and Efficient Transfer Learning with Hidden Parameter Markov Decision Processes

Killian, Taylor W. (Harvard University) | Konidaris, George (Brown University) | Doshi-Velez, Finale (Harvard University)

AAAI Conferences

An intriguing application of transfer learning emerges when tasks arise with similar, but not identical, dynamics. Hidden Parameter Markov Decision Processes (HiP-MDP) embed these tasks into a low-dimensional space; given the embedding parameters one can identify the MDP for a particular task. However, the original formulation of HiP-MDP had a critical flaw: the embedding uncertainty was modeled independently of the agent's state uncertainty, requiring an arduous training procedure. In this work, we apply a Gaussian Process latent variable model to jointly model the dynamics and the embedding, leading to a more elegant formulation, one that allows for better uncertainty quantification and thus more robust transfer.